Approximating Paley-Wiener Functions by Smoothed Step Functions
Authors
Abstract
Similar Resources
The Trace Paley Wiener Theorem for Schwartz Functions
on Π_temp(G(F)). The object of this note is to characterize the image of the map. Results of this nature are well known. The case of the Hecke algebra on G(F), which is in fact more difficult, was established in [3] and [5]. A variant of the problem for the smooth functions of compact support on a real group was solved in [4]. For the Schwartz space, one has a choice of several possible appro...
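For orientation, and hedging since the snippet is truncated: the map being characterized is presumably the familiar trace map, standard in the Hecke-algebra setting of [3] and [5] and assumed here to be the same map restricted to the Schwartz space, which sends a test function f to the linear functional it induces on tempered representations:

```latex
% Sketch of the trace map whose image the theorem characterizes
% (an assumption reconstructed from the truncated abstract).
\[
  f \;\longmapsto\; \Phi_f, \qquad
  \Phi_f(\pi) \;=\; \operatorname{tr} \pi(f)
            \;=\; \operatorname{tr} \int_{G(F)} f(x)\,\pi(x)\,dx,
  \qquad \pi \in \Pi_{\mathrm{temp}}(G(F)).
\]
```

The theorem then describes exactly which functionals Φ on Π_temp(G(F)) arise this way.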
Approximating Boolean Functions by OBDDs
In learning theory and genetic programming, OBDDs are used to represent approximations of Boolean functions. This motivates the investigation of the OBDD complexity of approximating Boolean functions with respect to given distributions on the inputs. We present a new type of reduction for one–round communication problems that is suitable for approximations. Using this new type of reduction, we ...
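As a concrete toy illustration of the objects involved (not the paper's reduction), the sketch below builds a reduced OBDD by Shannon expansion over a fixed variable order, then measures how well a deliberately tiny approximator agrees with a target function under the uniform input distribution; the class, helper names, and the 3-bit majority example are all illustrative assumptions.

```python
from itertools import product

# Minimal OBDD sketch: Shannon expansion over a fixed variable order,
# with a unique table so isomorphic subgraphs are shared (the "reduced"
# part). Illustrative only; not the construction from the paper.
class OBDD:
    def __init__(self):
        self.unique = {}                    # (var, lo, hi) -> node id
        self.next_id = 2                    # ids 0 and 1 are the terminals

    def mk(self, var, lo, hi):
        if lo == hi:                        # redundant test: drop the node
            return lo
        key = (var, lo, hi)
        if key not in self.unique:          # share identical subgraphs
            self.unique[key] = self.next_id
            self.next_id += 1
        return self.unique[key]

    def build(self, f, n, var=0, bits=()):
        if var == n:                        # all variables assigned
            return 1 if f(bits) else 0
        lo = self.build(f, n, var + 1, bits + (0,))
        hi = self.build(f, n, var + 1, bits + (1,))
        return self.mk(var, lo, hi)

def agreement(f, g, n):
    """Fraction of inputs (uniform distribution) on which f and g agree."""
    inputs = list(product((0, 1), repeat=n))
    return sum(f(x) == g(x) for x in inputs) / len(inputs)

maj = lambda bits: sum(bits) >= 2           # 3-bit majority
bdd = OBDD()
bdd.build(maj, 3)
print("exact OBDD internal nodes:", bdd.next_id - 2)

first_bit = lambda bits: bits[0] >= 1       # a one-node "approximating OBDD"
print("agreement with majority:", agreement(maj, first_bit, 3))  # 0.75
```

The single-variable approximator agrees with majority on 6 of the 8 uniform inputs, which is the kind of size-versus-accuracy trade-off the OBDD complexity of approximation quantifies.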
Approximating Ropelength by Energy Functions
The ropelength of a knot is the quotient of its length by its thickness. We consider a family of energy functions R for knots, depending on a power p, which approach ropelength as p increases. We describe a numerically computed trefoil knot which seems to be a local minimum for ropelength; there are nearby critical points for R, which are evidently local minima for large enough p.
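The mechanism by which a p-indexed energy can converge to a minimum-based quantity like thickness admits a simple numerical illustration. The sketch below is an assumption for illustration, not the paper's R: it uses the standard fact that the power mean (Σ d_i^(-p))^(-1/p) increases to min_i d_i as p grows, so a smooth p-energy can approach a thickness-style minimum from below.

```python
import numpy as np

# Power-mean surrogate for a minimum: (sum d_i**-p) ** (-1/p) -> min(d)
# as p -> infinity. The stand-in values d_i play the role of local
# "radius" measurements whose minimum would give thickness.
rng = np.random.default_rng(0)
d = rng.uniform(0.5, 2.0, size=100)

for p in (1, 4, 16, 64, 256):
    surrogate = np.sum(d ** -float(p)) ** (-1.0 / p)
    print(f"p={p:3d}  surrogate={surrogate:.4f}  true min={d.min():.4f}")
```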
Smoothed Action Value Functions
State-action value functions (i.e., Q-values) are ubiquitous in reinforcement learning (RL), giving rise to popular algorithms such as SARSA and Q-learning. We propose a new notion of action value defined by a Gaussian smoothed version of the expected Q-value. We show that such smoothed Q-values still satisfy a Bellman equation, making them learnable from experience sampled from an environment....
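A minimal Monte Carlo sketch of that definition, a Gaussian-smoothed Q-value Q̃(s, a) = E_{a′∼N(a, σ²)}[Q(s, a′)], is below; the toy quadratic Q and all parameter values are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

# Gaussian-smoothed action value: Q_sigma(s, a) = E[Q(s, a + sigma * Z)],
# Z ~ N(0, 1), estimated by Monte Carlo for a toy 1-D continuous action.
def q(state, action):
    return -(action - state) ** 2       # illustrative Q, peaked at a = s

def smoothed_q(state, action, sigma=0.3, n=100_000, seed=0):
    z = np.random.default_rng(seed).standard_normal(n)
    return q(state, action + sigma * z).mean()

s, a = 0.5, 0.2
print("Q(s, a)          =", q(s, a))           # exactly -0.09
print("smoothed Q(s, a) ~", smoothed_q(s, a))  # ~ -((a-s)**2 + sigma**2) = -0.18
```

For this quadratic toy, the smoothed value equals the raw value minus σ², so the Monte Carlo estimate can be checked in closed form.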
Journal
Journal title: Journal of Approximation Theory
Year: 1994
ISSN: 0021-9045
DOI: 10.1006/jath.1994.1088